Team Experiments Q1 2025 goals #10144
Conversation
I feel like it usually makes more sense - and is easier for people to get excited about - if they're responsible for a single objective (or sometimes two).
So instead of having 5 objectives with everyone responsible for all 5, is there a way to divvy these up and find a theme within them? E.g. do some of the "new features" also fall into the "Data and statistics" section, and is there one person who's going to take on the bulk of that work?
> easier for people to get excited about - if they're responsible for a single objective
I agree, and I’ve been thinking about that too, but I’ve been struggling to come up with such a structure.
For example, Daniel has been significantly contributing across all five areas, and I suspect he’ll continue doing so next quarter. So what’s the point of assigning Anders to "Data & Statistics" when we’ll all likely end up working on it anyway?
Or take Anders: he'll be working on the new Data Collection calculator, which suits him well since it's a statistical problem. But it's also a UI/UX problem, and he'll implement it end to end.
I guess I’m just struggling with how to break these up so that a single person is responsible for each big objective. We could have one person responsible for an objective with others assigned to subtasks, but what’s the significance of that? Would the objective owner somehow oversee the subtasks? I feel like that’s not really how we work here.
Anyway, I’m open to more feedback on how to restructure this 🙏
Looks like a good start! I left some questions to help facilitate further conversation.
We will review our current methodology to ensure it is both accurate and easy for our users to understand.

### Q1 2025 Objectives

This quarter, we will bring new feature improvements to make Experiments more advanced. We'll also focus on improving quality across different areas - data, documentation, and codebase - to help us move faster and deliver a better product.
What features do our competitors have that we don't, and which of those are the most important for closing the feature gap?
The proposed new features are ones that:
- Our major competitors already have
- Our large/priority customers have requested
I’ve used my judgment to prioritize them, taking into account how much we can realistically build in a single quarter.
#### Objective 1: New features
- [Timeseries chart for deltas and credible intervals](https://github.com/PostHog/posthog/issues/26931) @jurajmajerik
- [Winsorization](https://github.com/PostHog/posthog/issues/26060) to exclude extreme values from analyses. @andehen
How frequently is this a problem for our customers? What % of customers would this impact?
I assume you're asking about Winsorization here. We don’t have data for it, but I suspect many of our users won’t even notice there’s a data quality issue caused by outliers. We absolutely need to solve this, as it impacts correctness. Even if it affects only 10% of experiments, that means 10% of experiments are producing unreliable results. Correctness is a top priority for experiments, and we should proactively fix this, even if customers aren’t explicitly asking for it or aware of the issue.
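(For anyone unfamiliar with the technique, here's a minimal sketch of percentile-based winsorization. The function, cutoffs, and example data are illustrative only, not the actual implementation we'd ship.)

```python
import numpy as np

def winsorize(values: np.ndarray, lower_pct: float = 1.0, upper_pct: float = 99.0) -> np.ndarray:
    """Clip each value into the [lower_pct, upper_pct] percentile range,
    so a handful of extreme outliers can't dominate the metric."""
    lower, upper = np.percentile(values, [lower_pct, upper_pct])
    return np.clip(values, lower, upper)

# One user generating 10,000 events no longer drags the mean far away
# from what a typical user looks like.
rng = np.random.default_rng(0)
metric_values = np.concatenate([rng.poisson(3, size=999), [10_000]])
print(metric_values.mean())             # inflated by the single outlier (~13)
print(winsorize(metric_values).mean())  # close to the typical value (~3)
```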
- Improve diagnostics to provide more detailed information, like flagging missing exposure data. @andehen

#### Objective 3: Data Warehouse integration @danielbachhuber
- Ensure a great experience for our pilot users.
How might you quantify "ensure a great experience for our pilot users"? Also, what does success look like w/r/t adoption numbers?
Feel free to clarify this further and think about what you'd like it to be. In past quarterly planning, I've seen objectives like "Get positive feedback from 5 pilot customers" specified.
I took another pass at this. Every objective except the first one has a clear owner. Collaboration is still encouraged of course! @danielbachhuber, I figure you'll finish some of the Data & statistics work this quarter. We can update it later based on how it goes.
Changes
Team Experiments Q1 goals. Please have a look and feel free to suggest any changes.